Bidirectional Conditional Generative Adversarial Networks
Conditional Generative Adversarial Networks (cGANs) are generative models
that can produce data samples conditioned on both latent variables and known
auxiliary information. We propose the Bidirectional cGAN (BiCoGAN), which
effectively disentangles the latent and auxiliary factors in the generation
process and provides an encoder that learns inverse mappings from data
samples back to both factors, trained jointly with the generator and the
discriminator. We present crucial techniques for training BiCoGANs, which
involve an extrinsic factor loss along with an associated dynamically tuned
importance weight. Compared to other encoder-based cGANs, BiCoGANs encode
the auxiliary information more accurately, and utilize the latent and
auxiliary factors more effectively and in a more disentangled way to
generate samples.
Comment: To appear in Proceedings of ACCV 2018
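The bidirectional structure described above can be sketched as three networks trained jointly: a generator mapping (latent, auxiliary) pairs to samples, an encoder learning the inverse mapping, and a discriminator scoring joint tuples. This is a minimal illustrative sketch, not the paper's architecture; all layer sizes and the simple MSE form of the extrinsic factor loss are assumptions.

```python
import torch
import torch.nn as nn

Z_DIM, C_DIM, X_DIM = 16, 4, 32  # illustrative sizes, not from the paper

class Generator(nn.Module):
    """Maps (z, c) -> x by conditioning on the concatenation [z; c]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(Z_DIM + C_DIM, 64), nn.ReLU(), nn.Linear(64, X_DIM))

    def forward(self, z, c):
        return self.net(torch.cat([z, c], dim=1))

class Encoder(nn.Module):
    """Inverse mapping x -> (z_hat, c_hat), trained jointly with G and D."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(X_DIM, 64), nn.ReLU(), nn.Linear(64, Z_DIM + C_DIM))

    def forward(self, x):
        out = self.net(x)
        return out[:, :Z_DIM], out[:, Z_DIM:]

class Discriminator(nn.Module):
    """Scores joint tuples: real (x, E(x)) vs. generated (G(z, c), z, c)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(X_DIM + Z_DIM + C_DIM, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x, z, c):
        return self.net(torch.cat([x, z, c], dim=1))

def extrinsic_factor_loss(c_hat, c, gamma=1.0):
    """Hypothetical supervised term on the encoded auxiliary factor; gamma
    stands in for the dynamically tuned importance weight in the abstract."""
    return gamma * nn.functional.mse_loss(c_hat, c)
```

A forward pass would look like `x = G(z, c)` followed by `z_hat, c_hat = E(x)`, with the discriminator consuming the full (x, z, c) tuple so that the encoder and generator are pushed toward being mutual inverses.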
On d-Holomorphic Connections
We develop the theory of d-holomorphic connections on d-holomorphic vector
bundles over a Klein surface by constructing the analogous Atiyah exact
sequence for d-holomorphic bundles. We also give a criterion for the existence
of a d-holomorphic connection in a d-holomorphic bundle over a Klein surface in
the spirit of the Atiyah-Weil criterion for holomorphic connections.
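For orientation, the classical picture that the abstract's d-holomorphic analogue mirrors is the following: for a holomorphic vector bundle E over a Riemann surface X, Atiyah's construction gives a short exact sequence of sheaves, and a holomorphic connection is precisely a holomorphic splitting of it. (This is the classical statement, not the d-holomorphic version developed in the paper.)

```latex
0 \longrightarrow \mathrm{End}(E) \longrightarrow \mathrm{At}(E)
  \longrightarrow T_X \longrightarrow 0
```

The obstruction to splitting this sequence holomorphically is the Atiyah class $a(E) \in H^1\!\big(X, \mathrm{End}(E) \otimes \Omega^1_X\big)$; Weil's criterion states that on a compact Riemann surface a holomorphic connection exists if and only if every indecomposable direct summand of E has degree zero.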
CapsuleGAN: Generative Adversarial Capsule Network
We present Generative Adversarial Capsule Network (CapsuleGAN), a framework
that uses capsule networks (CapsNets) instead of the standard convolutional
neural networks (CNNs) as discriminators within the generative adversarial
network (GAN) setting, while modeling image data. We provide guidelines for
designing CapsNet discriminators and the updated GAN objective function, which
incorporates the CapsNet margin loss, for training CapsuleGAN models. We show
that CapsuleGAN outperforms convolutional-GAN at modeling the image data
distribution on the MNIST and CIFAR-10 datasets, evaluated on the generative
adversarial metric and at semi-supervised image classification.
Comment: To appear in Proceedings of ECCV Workshop on Brain Driven Computer
Vision (BDCV) 2018
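The margin loss that the updated GAN objective incorporates is the per-class loss from the original capsule networks paper (Sabour et al., 2017), computed on the lengths of the output capsule vectors. This is a sketch of that loss alone, with the paper's standard constants; how it is wired into the full discriminator objective is not shown.

```python
import numpy as np

def capsnet_margin_loss(v_norms, targets, m_plus=0.9, m_minus=0.1, lam=0.5):
    """CapsNet margin loss, per class k:
        L_k = T_k * max(0, m+ - ||v_k||)^2
              + lam * (1 - T_k) * max(0, ||v_k|| - m-)^2
    v_norms: (batch, classes) capsule output lengths in [0, 1].
    targets: (batch, classes) one-hot class indicators T_k.
    Returns the mean over the batch of the per-sample sum over classes."""
    present = targets * np.maximum(0.0, m_plus - v_norms) ** 2
    absent = lam * (1.0 - targets) * np.maximum(0.0, v_norms - m_minus) ** 2
    return float((present + absent).sum(axis=1).mean())
```

For example, a capsule length of 0.4 on the true class and 0.6 on a wrong class gives (0.9 - 0.4)^2 + 0.5 * (0.6 - 0.1)^2 = 0.375, while lengths of 0.9 (true) and 0.1 (wrong) incur zero loss.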
Performance Evaluation of Fine-tuned Faster R-CNN on specific MS COCO Objects
Fine-tuning is often required to adapt a model to users' explicit requirements, but the question remains whether the fine-tuned model is accurate enough for a given application. This paper presents the metrics used for performance evaluation of a Convolutional Neural Network (CNN) model. The evaluation is based on the training process, which yields intermediate models after every 1000 iterations. Since 1000 iterations is too fine a granularity over the full range of 490k iterations, the models are grouped into bins of 100k iterations each. The recorded metrics are then compared to evaluate the models in terms of accuracy. The model was trained on a set of specific categories chosen from the Microsoft Common Objects in Context (MS COCO) dataset, while allowing users to test its accuracy on their own externally available images. Our trained model detects all the objects present in an image, demonstrating its precision.
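Detection metrics of the kind discussed above are typically built from intersection-over-union (IoU) matching between predicted and ground-truth boxes. The sketch below shows one minimal, illustrative way to compute precision and recall at a fixed IoU threshold; it is not the paper's evaluation code, and the greedy unscored matching is a simplifying assumption.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def precision_recall(preds, gts, iou_thr=0.5):
    """Precision/recall via greedy one-to-one matching of predictions
    to ground-truth boxes at the given IoU threshold."""
    matched = set()
    tp = 0
    for p in preds:
        for i, g in enumerate(gts):
            if i not in matched and iou(p, g) >= iou_thr:
                matched.add(i)
                tp += 1
                break
    fp = len(preds) - tp  # unmatched predictions
    fn = len(gts) - tp    # undetected ground-truth objects
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(gts) if gts else 0.0
    return precision, recall
```

A detector that finds every ground-truth object drives recall to 1.0, while spurious detections lower precision; COCO-style evaluation additionally averages over IoU thresholds and ranks predictions by confidence.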
Invariant Representations through Adversarial Forgetting
We propose a novel approach to achieving invariance for deep neural networks
in the form of inducing amnesia to unwanted factors of data through a new
adversarial forgetting mechanism. We show that the forgetting mechanism serves
as an information-bottleneck, which is manipulated by the adversarial training
to learn invariance to unwanted factors. Empirical results show that the
proposed framework achieves state-of-the-art performance at learning invariance
in both nuisance and bias settings on a diverse collection of datasets and
tasks.
Comment: To appear in Proceedings of the 34th AAAI Conference on Artificial
Intelligence (AAAI-20)
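The forgetting mechanism described above can be pictured as a learned multiplicative mask applied to an encoded representation, with an adversary trying to recover the unwanted factor from the masked code. This is a shape-level sketch under assumed dimensions; the actual architecture, losses, and the alternating adversarial update schedule are omitted.

```python
import torch
import torch.nn as nn

D_IN, D_REP, N_CLASSES, N_NUISANCE = 20, 8, 3, 2  # illustrative sizes

encoder = nn.Linear(D_IN, D_REP)                         # x -> representation r
forget_gate = nn.Sequential(nn.Linear(D_IN, D_REP),
                            nn.Sigmoid())                # x -> mask m in [0, 1]^d
task_head = nn.Linear(D_REP, N_CLASSES)                  # predicts target from r * m
adversary = nn.Linear(D_REP, N_NUISANCE)                 # tries to recover the
                                                         # unwanted factor

x = torch.randn(5, D_IN)
r = encoder(x)
m = forget_gate(x)
r_tilde = r * m  # multiplicative "forgetting": the mask throttles information flow

task_logits = task_head(r_tilde)
# The adversary sees only the filtered code; adversarial training would push
# the gate to zero out dimensions predictive of the unwanted factor, acting
# as the information bottleneck described in the abstract.
nuisance_logits = adversary(r_tilde.detach())
```

Because each mask entry lies in [0, 1], shrinking entries toward zero destroys information irreversibly, which is what lets the gate act as a bottleneck rather than merely rotating the representation.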